Effective human-machine teaming requires the ability to communicate team goals and the constraints under which the agent must operate. Providing the ability to specify a team's shared intent or operating norms allows an AI agent to perform its primary function while still satisfying the specific wishes of the current team. Although significant work has been done on instructing agents to perform tasks through language or demonstration, prior work lacks a focus on building agents that can operate within team-specified parameters. Worse yet, there is a dearth of research on enabling humans to provide their specifications through unstructured, naturalistic language. In this paper, we propose using goals and constraints as a scaffold for conditioning and evaluating autonomous agents. We contribute to this area by introducing a novel dataset, and associated data collection protocol, that maps language descriptions to goals and constraints corresponding to specific strategies developed by human participants for the board game Risk. Leveraging state-of-the-art language models and augmentation procedures, we develop a machine learning framework that can identify goals and constraints from unstructured strategy descriptions. To validate our approach, we conduct a human-subjects study to establish a human baseline for our dataset. Our results show that our machine learning architecture is better able to interpret unstructured language descriptions into strategy specifications than human raters performing the same machine-translation task (F(1,272.53) = 17.025, p < 0.001).
translated by 谷歌翻译
Creativity is an indispensable part of human cognition and also an inherent part of how we make sense of the world. Metaphorical abstraction is fundamental in communicating creative ideas through nuanced relationships between abstract concepts such as feelings. While computer vision benchmarks and approaches predominantly focus on understanding and generating literal interpretations of images, metaphorical comprehension of images remains relatively unexplored. Towards this goal, we introduce MetaCLUE, a set of vision tasks on visual metaphor. We also collect high-quality and rich metaphor annotations (abstract objects, concepts, relationships along with their corresponding object boxes) as there do not exist any datasets that facilitate the evaluation of these tasks. We perform a comprehensive analysis of state-of-the-art models in vision and language based on our annotations, highlighting strengths and weaknesses of current approaches in visual metaphor Classification, Localization, Understanding (retrieval, question answering, captioning) and gEneration (text-to-image synthesis) tasks. We hope this work provides a concrete step towards developing AI systems with human-like creative capabilities.
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
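The k-fold cross-validation practice the survey asks about can be sketched as follows, using only the standard library; this is an illustrative minimal version, not any particular challenge pipeline (real entries would typically use scikit-learn's `KFold` or framework utilities).

```python
# Minimal k-fold cross-validation over sample indices: partition the
# training set into k folds, then use each fold once as validation
# while training on the remaining k-1 folds.

def kfold_indices(n_samples, k):
    """Split indices 0..n_samples-1 into k nearly equal contiguous folds."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(n_samples, k):
    """Yield (train_indices, val_indices) pairs for each of the k folds."""
    folds = kfold_indices(n_samples, k)
    for i in range(k):
        val = folds[i]
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        yield train, val

for train, val in cross_validate(10, 5):
    print(len(train), len(val))
```

Each of the 10 samples appears in exactly one validation fold, which is what makes the resulting performance estimate use the whole training set.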
Large-scale diffusion models have achieved state-of-the-art results on text-to-image synthesis (T2I) tasks. Despite their ability to generate high-quality yet creative images, we observe that attribute binding and compositional capabilities are still considered major challenging issues, especially when multiple objects are involved. In this work, we improve the compositional skills of T2I models, specifically more accurate attribute binding and better image compositions. To do this, we incorporate linguistic structures with the diffusion guidance process, based on the controllable properties of manipulating cross-attention layers in diffusion-based T2I models. We observe that keys and values in cross-attention layers have strong semantic meanings associated with object layouts and content. Therefore, we can better preserve the compositional semantics in the generated image by manipulating the cross-attention representations based on linguistic insights. Built upon Stable Diffusion, a SOTA T2I model, our structured cross-attention design is efficient in that it requires no additional training samples. We achieve better compositional skills in qualitative and quantitative results, leading to a 5-8% advantage in head-to-head user comparison studies. Lastly, we conduct an in-depth analysis to reveal potential causes of incorrect image compositions and justify the properties of cross-attention layers in the generation process.
Prompt tuning is a new few-shot transfer learning technique that only tunes the learnable prompt for pre-trained vision and language models such as CLIP. However, existing prompt tuning methods tend to learn spurious or entangled representations, which leads to poor generalization to unseen concepts. Towards non-spurious and efficient prompt learning from limited examples, this paper presents a novel \underline{\textbf{C}}ounterfactual \underline{\textbf{P}}rompt \underline{\textbf{L}}earning (CPL) method for vision and language models, which simultaneously employs counterfactual generation and contrastive learning in a joint optimization framework. In particular, CPL constructs counterfactuals by identifying the minimal non-spurious feature change between semantically similar positive and negative samples that causes concept change, and learns a more generalizable prompt representation from both factual and counterfactual examples via contrastive learning. Extensive experiments demonstrate that CPL can obtain superior few-shot performance on different vision and language tasks than previous prompt tuning methods on CLIP. On image classification, we achieve 3.55\% average relative improvement on unseen classes across seven datasets; on image-text retrieval and visual question answering, we gain up to 4.09\% and 25.08\% relative improvements across three few-shot scenarios on unseen test sets respectively.
Several papers rightly include minority groups in artificial intelligence (AI) training data to improve test inference for minority groups and/or society at large. Society at large consists of both minority and majority stakeholders. A common misconception is that minority inclusion does not, on its own, improve performance for the majority group. In this paper, we make the surprising finding that including minority samples can improve test error for the majority group. In other words, minority inclusion leads to majority performance enhancement (MIME). A theoretical existence proof of the MIME effect is presented and found to be consistent with experimental results on six different datasets. Project webpage: https://visual.ee.ucla.edu/mime.htm/
Learning from one's mistakes is an effective human learning technique, in which learners pay more attention to the topics where they made mistakes in order to deepen their understanding. In this paper, we investigate whether this human learning strategy can be applied to machine learning. We propose a novel machine learning method called Learning From Mistakes (LFM), in which a learner improves its ability to learn by focusing more on its mistakes during revision. We formulate LFM as a three-stage optimization problem: 1) the learner learns; 2) the learner re-learns, focusing on mistakes; and 3) the learner validates its learning. We develop an efficient algorithm to solve the LFM problem. We apply the LFM framework to neural architecture search on CIFAR-10, CIFAR-100, and ImageNet. Experimental results strongly demonstrate the effectiveness of our model.
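The three-stage structure described in the abstract can be illustrated with a toy sketch. The 1-D threshold classifier and the simple re-weighting rule below are illustrative assumptions, not the paper's actual bi-level optimization formulation.

```python
# Toy "learn / re-learn focusing on mistakes / validate" loop on a
# 1-D threshold classifier over (x, label) pairs with labels in {0, 1}.

def train_threshold(data, weights):
    """Pick the threshold minimizing weighted error; predict 1 iff x >= t."""
    best_t, best_err = None, float("inf")
    for t in sorted(x for x, _ in data):
        err = sum(w for (x, y), w in zip(data, weights)
                  if (1 if x >= t else 0) != y)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def lfm(train_data, val_data):
    n = len(train_data)
    # Stage 1: the learner learns with uniform example weights.
    t1 = train_threshold(train_data, [1.0] * n)
    # Stage 2: re-learn, up-weighting the examples it got wrong.
    weights = [3.0 if (1 if x >= t1 else 0) != y else 1.0
               for x, y in train_data]
    t2 = train_threshold(train_data, weights)
    # Stage 3: validate the revised learner on held-out data.
    val_err = sum(1 for x, y in val_data if (1 if x >= t2 else 0) != y)
    return t2, val_err / len(val_data)
```

The key idea stage 2 mimics is that mistakes from the first pass change the objective of the second pass, rather than being discarded.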
Causality detection has attracted much attention in the fields of natural language processing and linguistics research. It has essential applications in information retrieval, event prediction, question answering, financial analysis, and market research. In this study, we explore several methods for identifying and extracting cause-effect pairs in financial documents using transformers. For this purpose, we propose an approach that combines POS tagging with the BIO scheme and can be integrated with modern transformer models to address the challenge of identifying causal relations in a given text. Our best methodology achieves an F1 score of 0.9551, with an exact match score of 0.8777, on the blind test set of the FinCausal-2021 workshop.
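The BIO scheme mentioned above labels each token as the Beginning of a span, Inside a span, or Outside any span; a minimal sketch follows, where the label names (`CAUSE`, `EFFECT`) and the example sentence are illustrative assumptions rather than the paper's actual tag set.

```python
# Encode cause/effect spans over a tokenized sentence as BIO tags.

def bio_encode(tokens, spans):
    """spans: {label: (start, end)} over token indices, end exclusive."""
    tags = ["O"] * len(tokens)
    for label, (start, end) in spans.items():
        tags[start] = f"B-{label}"          # first token of the span
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"          # continuation tokens
    return tags

tokens = "Rising rates hurt housing demand".split()
tags = bio_encode(tokens, {"CAUSE": (0, 2), "EFFECT": (3, 5)})
print(list(zip(tokens, tags)))
```

Framed this way, extracting cause-effect pairs becomes a token classification task, which is what lets it plug into standard transformer fine-tuning.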
Fonts are ubiquitous across documents and come in a variety of styles. They are represented in a native vector format or rasterized to produce fixed-resolution images. In the first case, the non-standard representation prevents benefiting from the latest network architectures for neural representations; in the latter case, the rasterized representation incurs a loss of data fidelity when encoded through a network, as font-specific discontinuities like edges and corners are difficult to represent using neural networks. Based on the observation that complex fonts can be represented by a superposition of a set of simpler occupancy functions, we introduce \textit{multi-implicits} to represent fonts as a permutation-invariant set of learned implicit functions, without losing features (e.g., edges and corners). However, while multi-implicits locally preserve font features, obtaining supervision in the form of ground-truth multi-channel signals is itself a problem. Instead, we propose how to train such a representation with only local supervision, while the proposed neural architecture directly discovers globally consistent multi-implicits for font families. We extensively evaluate the proposed representation on a variety of tasks, including reconstruction, interpolation, and synthesis, to demonstrate clear advantages over existing alternatives. Additionally, the representation naturally enables glyph completion, where a single characteristic font is used to synthesize a whole font family in the target style.
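The underlying observation, that a complex shape's occupancy can be the superposition of simpler occupancy functions, can be sketched in a few lines; the hard-coded circles below are illustrative stand-ins for the learned implicit functions in the abstract.

```python
# Superpose (pointwise union via max) a set of simple occupancy
# functions, each mapping a 2-D point to 1.0 (inside) or 0.0 (outside).

def circle_occupancy(cx, cy, r):
    """Occupancy function of a disk centered at (cx, cy) with radius r."""
    return lambda x, y: 1.0 if (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2 else 0.0

def superpose(parts):
    """Union of a set of occupancy functions; order of parts is irrelevant."""
    return lambda x, y: max(f(x, y) for f in parts)

glyph = superpose([circle_occupancy(0, 0, 1), circle_occupancy(2, 0, 1)])
print(glyph(0, 0), glyph(2, 0), glyph(1, 1))
```

Because `max` is symmetric in its arguments, the combined shape does not depend on the order of the parts, which is the permutation invariance the abstract refers to.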